perm filename DOYLE.2[S80,JMC] blob
sn#517178 filedate 1980-06-18 generic text, type C, neo UTF8
Modelling Deliberation, Action and Introspection
This is a proposal to extend the work of the Formal
Reasoning Group of the Stanford Artificial Intelligence Laboratory
to study programs modelling deliberation, action and introspection.
The work will be done by Jon Doyle and by John McCarthy.
Doyle's work requires additional DARPA support amounting to $62,055
for the period October 1, 1980 through October 31, 1981
as described in the budgetary section of this proposal.
No additional support is requested for McCarthy.
Advanced intelligent computer programs must reason about
the effects of their potential future actions, and this includes
reasoning about their own ability to solve problems by reason and
action. A promising approach is to regard reasoning itself as
a species of action whose effects can be reasoned about.
Thus the program must reason about what it would
be able to do or would do in hypothetical future circumstances.
Carrying this out accurately and effectively involves self-observation
akin to human introspection.
Self-observation includes making and examining traces of the inferences
made, so that when a conclusion has to be revised, the reasoning
that led to it can be identified and the assumption or conjectural
conclusion that has to be retracted can be located.
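The record-keeping described above can be illustrated by a minimal sketch in modern notation. All names here (Record, support_of, the sample beliefs) are illustrative assumptions, not code or terminology from the proposal:

```python
# Minimal sketch of recording justifications for conclusions so that,
# when a belief must be revised, the assumptions behind it can be found.
# Names and sample beliefs are illustrative only.

class Record:
    """A conclusion together with the premises that justify it."""
    def __init__(self, conclusion, premises):
        self.conclusion = conclusion
        self.premises = premises   # names of supporting beliefs

trace = [
    Record("meeting-on-wednesday", ["default-schedule"]),
    Record("attend-meeting", ["meeting-on-wednesday", "no-conflict"]),
]

def support_of(conclusion):
    """Collect, transitively, every premise that supports a conclusion."""
    found = set()
    for r in trace:
        if r.conclusion == conclusion:
            for p in r.premises:
                found.add(p)
                found |= support_of(p)
    return found
```

If "attend-meeting" must be retracted, its support identifies the assumptions ("default-schedule", "no-conflict") that might be withdrawn.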
Much human decision making involves processes that may be
called %2dialectical argumentation%1. Reasons for and against
a contemplated course of action are developed and played against
one another. This process, which we believe is also required for
advanced computer intelligence, is quite different from the mathematical
deductions heretofore carried out by computer programs. In particular,
it involves non-monotonic reasoning of the kinds recently studied
by McCarthy (1980), McDermott and Doyle (1980), Doyle (1978) and Reiter
(1980).
For example, consider what happens when a person proposes
an argument and then thinks of a counter-argument. The original
argument leads to a certain conclusion. If this conclusion were
a logical consequence, in the sense of mathematical logic, then
no additional considerations would change the conclusion unless
some of the premises were found to be incorrect. In fact, such
arguments are almost never logical deductions but non-monotonic
consequences of two kinds.
One kind of non-monotonic conclusion is the default. A default
is represented by a sentence that is taken to be true provided
other sentences being considered don't refute it. An example from
Doyle (1978) is %2"The meeting is on Wednesday unless there is
a reason why not"%1. A program will use this default to conclude
that the meeting is on Wednesday unless it has a sentence asserting
something incompatible, such as a conflicting meeting on Wednesday.
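The behavior of such a default can be sketched in a few lines. The function and fact names below are hypothetical, chosen only to mirror the Wednesday example:

```python
# Sketch of the default "the meeting is on Wednesday unless there is
# a reason why not": the conclusion holds only as long as no known
# sentence refutes it. Names are illustrative, not from the proposal.

def meeting_day(known_facts):
    if any(f.startswith("conflict-on-wednesday") for f in known_facts):
        return "undecided"          # a refuting sentence blocks the default
    return "wednesday"              # no refutation known: the default holds

# Non-monotonicity: adding a fact withdraws the earlier conclusion.
assert meeting_day([]) == "wednesday"
assert meeting_day(["conflict-on-wednesday: room taken"]) == "undecided"
```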
Another kind of non-monotonic consequence occurs when the
facts at a person's or program's disposal show the existence of
certain objects of a given kind, and the person or program
concludes that these are all of the objects of the given kind.
Thus we may know that a boat has a leak and lacks oars, and we
may conjecture (non-monotonically) that these are the only "things"
wrong with the boat.
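This second kind of conjecture can also be sketched briefly. The names below are illustrative assumptions built around the boat example, not the proposal's own notation:

```python
# Sketch of the conjecture that the known faults of the boat are the
# ONLY faults, so repairing all of them would make the boat usable.
# The conjecture is defeasible: learning a new fault withdraws it.

def boat_ready(known_faults, repaired):
    """Non-monotonic step: treat the known faults as all the faults."""
    return set(known_faults) <= set(repaired)

# With only the leak and missing oars known, repairing both suffices.
assert boat_ready({"leak", "no-oars"}, {"leak", "no-oars"})

# A counter-argument adds the broken rudder, upsetting the conclusion.
assert not boat_ready({"leak", "no-oars", "broken-rudder"},
                      {"leak", "no-oars"})
```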
Argument, whether with another person or with oneself, often
involves finding reasons permitting such non-monotonically obtained
conclusions and then finding counter-arguments. In the above
examples, a counter-argument might involve a non-monotonic deduction
that there is another group in Wednesday's meeting room or that
the boat also has a broken rudder. Counter-counter-arguments may
involve asserting the existence of another room or a plan for
fixing the rudder.
We propose to extend the work of our Formal Reasoning Group
to develop theories and write programs that will decide what to
do by reasoning that includes introspection and dialectical argumentation.
This work will be based on ideas in Jon Doyle's (1980) PhD
dissertation and on approaches to non-monotonic reasoning by
Doyle and by McCarthy.
Doyle's thesis investigates the problem of controlling or
directing the reasoning and actions of a computer program. The basic
approach explored is to view reasoning as a species of action, so that a
program might apply its reasoning powers to the task of deciding what
inferences to make as well as deciding what other actions to take. A
design for the architecture of reasoning programs is proposed. This
architecture involves self-consciousness, intentional actions, deliberate
adaptations, and a form of decision-making based on dialectical
argumentation. A program based on this architecture inspects itself,
describes aspects of itself to itself, and uses this self-reference and
these self-descriptions in making decisions and taking actions. The
program's mental life includes awareness of its own concepts, beliefs,
desires, intentions, inferences, actions, and skills. All of these are
represented by self-descriptions in a single sort of language, so that the
program has access to all of these aspects of itself, and can reason about
them in the same terms.
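The idea of a single sort of language for all such self-descriptions can be sketched as follows; the kinds and contents below are hypothetical examples, not drawn from the dissertation:

```python
# Sketch of uniform self-description: beliefs, desires, intentions,
# and skills all represented in one form, so the program can inspect
# every aspect of itself in the same terms. Entries are illustrative.

self_model = [
    ("belief",    "the meeting is on wednesday"),
    ("desire",    "attend the meeting"),
    ("intention", "travel to the meeting room"),
    ("skill",     "route planning"),
]

def aspects(kind):
    """The program inspects its own mental state by kind."""
    return [content for k, content in self_model if k == kind]

# The same query mechanism serves beliefs and skills alike.
assert aspects("belief") == ["the meeting is on wednesday"]
assert aspects("skill") == ["route planning"]
```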
During the thirteen-month period of this addition to
the work, the studies will mainly be conceptual. This is
partly because these ideas require additional theoretical work
before they can be embodied in programs and partly because
programs that keep a trace of their own reasoning processes
may require bigger memories than are currently available for
single processes at Stanford. In any case, implementation
will be a large enough project to require very careful planning.
By the end of 1981, we will have planned a system that
will be able to reason about its own actions and "thoughts" and
carry out internal arguments as well as simpler forms of non-monotonic
reasoning.